[DRAFT] Try make quantize kv cache work #6926
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/6926
Note: Links to docs will display an error until the docs builds have been completed.
❗ 2 Active SEVs: there are 2 currently active SEVs. If your PR is affected, please view them below.
❌ 13 New Failures as of commit a3daba3 with merge base b4ab76f.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
```python
# if self.use_kv_cache:
#     print("Setting up KV cache on the model...")
#     self.model_.setup_caches(
#         batch_size=1,
#         dtype=self.dtype,
#         decoder_max_seq_len=self.max_seq_len,
#     )
```
We need to do this because the source transform happens after the model is set up, and we need to call the setup_cache function of the newly swapped-in ET attention. So we move setup_caches to after the source transform.
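For reference, a minimal sketch of the reordering this comment describes; `DummyDecoder` and `apply_source_transforms` are illustrative stand-ins, not the actual ExecuTorch/TorchTune APIs:

```python
import torch


class DummyDecoder(torch.nn.Module):
    """Stand-in for a TorchTune-style decoder with a setup_caches hook."""

    def setup_caches(self, batch_size, dtype, decoder_max_seq_len):
        # In the real model this allocates KV-cache buffers on every
        # attention layer; here we only record the requested geometry.
        self.cache_shape = (batch_size, decoder_max_seq_len)
        self.cache_dtype = dtype


def apply_source_transforms(model):
    # Placeholder for the source transform that swaps in the ET attention.
    # The swapped-in modules define their own setup_cache, which is why
    # setup_caches must run after this step rather than before it.
    return model


model = apply_source_transforms(DummyDecoder())  # 1. swap attention first
model.setup_caches(                              # 2. then allocate the caches
    batch_size=1,
    dtype=torch.float32,
    decoder_max_seq_len=2048,
)
```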
```python
if args.model in TORCHTUNE_DEFINED_MODELS:
    if args.use_kv_cache:
        print("Setting up the KV cache...")
        model_manager.model.setup_caches(
            batch_size=1,
            dtype=dtype_override.to_torch_dtype(),
            decoder_max_seq_len=args.max_seq_length,
        )
return model_manager
```
setup_caches is moved here.
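Note that setup_caches is called with batch_size=1 and decoder_max_seq_len=args.max_seq_length; presumably the KV-cache buffers are preallocated to the maximum sequence length at export time, since the exported graph needs statically shaped caches.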
```python
batch_size=batch_size,
max_seq_len=max_seq_len,
num_kv_heads=self.num_kv_heads,
# self.kv_cache = InferenceKVCache(
```
Can you try adding `from executorch.extension.llm.custom_ops import *` here and see if that works?
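For context on this suggestion: modules like `custom_ops` typically register custom kernels with PyTorch as an import side effect, so the import just needs to run before export/tracing. A rough sketch of what the suggested line does (the op name in the comment is an assumption for illustration):

```python
# The wildcard import is needed only for its side effect: loading the module
# registers ExecuTorch's custom ops (for example, the KV-cache update kernel)
# in the torch.ops namespace, so the swapped-in attention can resolve them.
from executorch.extension.llm.custom_ops import *  # noqa: F401,F403

# After the import, calls such as torch.ops.llama.sdpa_with_kv_cache(...)
# (name assumed for illustration) resolve during tracing instead of failing
# with an "operator not found" error.
```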
Summary
[PLEASE REMOVE] See CONTRIBUTING.md's Pull Requests for ExecuTorch PR guidelines.
[PLEASE REMOVE] If this PR closes an issue, please add a `Fixes #<issue-id>` line.
[PLEASE REMOVE] If this PR introduces a fix or feature that should be in the upcoming release notes, please add a "Release notes: " label. For a list of available release notes labels, check out CONTRIBUTING.md's Pull Requests.
Test plan
[PLEASE REMOVE] How did you test this PR? Please write down any manual commands you used and note down tests that you have written if applicable.